
    New elementary components of the Gorenstein locus of the Hilbert scheme of points

    We construct new explicit examples of nonsmoothable Gorenstein algebras with Hilbert function (1, n, n, 1). This gives a new infinite family of elementary components in the Gorenstein locus of the Hilbert scheme of points and solves the cubic case of Iarrobino's conjecture. Comment: 18 pages.
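    For readers outside commutative algebra, a brief reminder of the notation (not taken from the abstract, and stated under the usual assumption that k is a field of characteristic zero): a graded Artinian Gorenstein algebra has a symmetric Hilbert function, so socle degree 3 forces the shape (1, n, n, 1), and by Macaulay's apolarity correspondence such algebras arise as quotients by the annihilator of a cubic form.

        % Hilbert function of a graded Artinian algebra A = A_0 + A_1 + A_2 + A_3
        \[
          H_A(i) = \dim_k A_i, \qquad
          \bigl(H_A(0), H_A(1), H_A(2), H_A(3)\bigr) = (1,\, n,\, n,\, 1).
        \]
        % Macaulay's apolarity: for a cubic form F, the apolar algebra
        % A_F (with y_i acting on F as \partial/\partial x_i) is Artinian
        % Gorenstein of socle degree 3, hence has the Hilbert function above.
        \[
          A_F = k[y_1, \dots, y_n]\big/\operatorname{Ann}(F),
          \qquad F \in k[x_1, \dots, x_n]_3,
          \qquad y_i \circ F := \frac{\partial F}{\partial x_i}.
        \]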

    A High-Frequency Load-Store Queue with Speculative Allocations for High-Level Synthesis

    Dynamically scheduled high-level synthesis (HLS) enables the use of load-store queues (LSQs), which can disambiguate data hazards at circuit runtime, increasing throughput in codes with unpredictable memory accesses. However, the increased throughput comes at the price of a lower clock frequency and higher resource usage compared to statically scheduled circuits without LSQs. The lower frequency often nullifies any throughput improvement over static scheduling, while the resource usage becomes prohibitively expensive for large queue sizes. This paper presents a method for achieving dynamically scheduled memory operations in HLS without a significant increase in clock period or resource usage. We present a novel LSQ based on shift registers, enabled by the opportunity in HLS to specialize queue sizes to a target code. We show a method to speculatively allocate addresses to our LSQ, significantly increasing pipeline parallelism in codes that could not benefit from an LSQ before. In stark contrast to traditional load value speculation, we do not require pipeline replays and have no overhead on misspeculation. On a set of benchmarks with data hazards, our approach achieves an average speedup of 11× against static HLS and 5× against dynamic HLS that uses a state-of-the-art LSQ from previous work. Our LSQ also uses several times fewer resources, scales to queues with hundreds of entries, and supports both on-chip and off-chip memory. Comment: To appear in the International Conference on Field Programmable Technology (FPT'23), Yokohama, Japan, 11-14 December 2023.
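    To make "codes with data hazards" concrete, the hypothetical C kernel below (not taken from the paper or its benchmarks) shows the pattern the abstract targets: the store address depends on input data, so whether consecutive iterations alias is only known at runtime, which is exactly where a load-store queue can disambiguate accesses that a static scheduler must conservatively serialize.

        #include <stdio.h>

        #define N 8
        #define BINS 4

        /* Histogram-style kernel: the store to hist[idx[i]] may alias the load
         * of hist[idx[i+1]] in the next iteration.  A static HLS scheduler must
         * assume the worst case and serialize the loop; an LSQ can resolve the
         * addresses at runtime and keep the pipeline full. */
        void histogram(const int *idx, const int *w, int *hist, int n) {
            for (int i = 0; i < n; ++i)
                hist[idx[i]] += w[i];  /* load + store through a data-dependent address */
        }

        int main(void) {
            int idx[N] = {0, 1, 1, 3, 2, 0, 3, 1};
            int w[N]   = {1, 2, 3, 4, 5, 6, 7, 8};
            int hist[BINS] = {0};
            histogram(idx, w, hist, N);
            for (int b = 0; b < BINS; ++b)
                printf("hist[%d] = %d\n", b, hist[b]);
            return 0;
        }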

    Compiler Discovered Dynamic Scheduling of Irregular Code in High-Level Synthesis

    Dynamically scheduled high-level synthesis (HLS) achieves higher throughput than static HLS for codes with unpredictable memory accesses and control flow. However, excessive dataflow scheduling results in circuits that use more resources and have a slower critical path, even when only part of the circuit exhibits dynamic behavior. Recent work has shown that marking parts of a dataflow circuit for static scheduling can save resources and improve performance (hybrid scheduling), but the dynamic part of the circuit still bottlenecks the critical path. We propose instead to selectively introduce dynamic scheduling into static HLS. This paper presents an algorithm for identifying code regions amenable to dynamic scheduling and shows a methodology for introducing dynamically scheduled basic blocks, loops, and memory operations into static HLS. Our algorithm is informed by modulo scheduling and can be integrated into any modulo-scheduled HLS tool. On a set of ten benchmarks, we show that our approach achieves, on average, up to a 3.7× and 3× speedup against dynamic and hybrid scheduling, respectively, with an area overhead of 1.3× and a frequency degradation of 0.74× compared to static HLS. Comment: To appear in the 33rd International Conference on Field-Programmable Logic and Applications (2023).
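    As a hedged illustration (hypothetical, not from the paper or its benchmark set), the C sketch below shows the kind of mixed loop such a region-identification analysis is meant to handle: most of the body is regular and pipelines well under static modulo scheduling, and only the small, data-dependent region would be a candidate for dynamic scheduling.

        #include <stdio.h>

        #define N 8

        /* Mixed regular/irregular loop: the multiply-accumulate is perfectly
         * predictable and pipelines well under static (modulo) scheduling, while
         * the guarded, data-dependent update of tbl[] has unpredictable control
         * flow and a potentially aliasing store -- the kind of region a compiler
         * analysis might mark for dynamic scheduling. */
        int mixed(const int *a, const int *key, int *tbl, int n) {
            int acc = 0;
            for (int i = 0; i < n; ++i) {
                acc += a[i] * a[i];      /* regular part */
                if (a[i] > 0)
                    tbl[key[i]] += a[i]; /* irregular part */
            }
            return acc;
        }

        int main(void) {
            int a[N]   = {3, -1, 4, 1, -5, 9, 2, -6};
            int key[N] = {0, 2, 1, 1, 3, 0, 2, 3};
            int tbl[4] = {0};
            printf("acc = %d\n", mixed(a, key, tbl, N));
            return 0;
        }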